[NeuralChat] Enable RAG's table extraction and summary #1417
base: main
Conversation
⚡ Required checks status: All passing 🟢
Groups summary:
- 🟢 Format Scan Tests workflow
- 🟢 NeuralChat Unit Test
- 🟢 Chat Bot Test workflow
These checks are required after the changes.
Thank you for your contribution! 💜
Force-pushed from 47dde9c to 22731d5 (Compare)
Please add installation and usage instructions for PDF table-to-text in intel_extension_for_transformers/neural_chat/pipeline/plugins/retrieval/README.md.
return result

tables_result = []
def get_relation(table_coords, caption_coords, table_page_number, caption_page_number, threshold=100):
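For readers skimming the diff, here is a minimal, self-contained sketch of what a coordinate-based table/caption matcher with this signature could look like; the bounding-box convention and the distance heuristic are assumptions, not the PR's actual logic:

```python
# Hypothetical sketch: decide whether a caption belongs to a table by comparing
# their page numbers and layout coordinates. Boxes are assumed to be
# (x0, y0, x1, y1) tuples in page space; the PR's real heuristic may differ.
def get_relation(table_coords, caption_coords, table_page_number,
                 caption_page_number, threshold=100):
    if table_page_number != caption_page_number:
        return False
    # Vertical gap between caption and table, taking whichever orientation
    # (caption above or below the table) is closer.
    gap = min(abs(caption_coords[3] - table_coords[1]),
              abs(table_coords[3] - caption_coords[1]))
    return gap <= threshold
```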
Force-pushed from 8da7b00 to 81ca43d (Compare)
@@ -92,6 +92,7 @@ Below are the descriptions of the available parameters in `agent_QA`:
| enable_rerank | bool | Whether to enable the retrieval-then-rerank pipeline | True, False |
| reranker_model | str | The name of the reranker model from Hugging Face or a local path | - |
| top_n | int | The number of results returned by the reranker model | - |
| table_strategy | str | The strategy for understanding tables during table retrieval. Moving from "fast" to "hq" to "llm" shifts the focus toward deeper table understanding at the expense of processing speed. The default strategy is "fast". | "fast", "hq", "llm" |
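For orientation, a hedged sketch of how these parameters might be grouped when configuring the retrieval plugin; the key names follow the table above, but the values shown are illustrative assumptions, not documented defaults:

```python
# Sketch of a retrieval-plugin argument dict using the parameters documented above.
# "input_path" and the reranker model name are illustrative placeholders.
retrieval_args = {
    "input_path": "./docs",                      # documents to index, incl. PDFs with tables
    "enable_rerank": True,                       # retrieval-then-rerank pipeline
    "reranker_model": "BAAI/bge-reranker-large", # Hugging Face name or local path
    "top_n": 3,                                  # results returned by the reranker
    "table_strategy": "hq",                      # "fast" (default) | "hq" | "llm"
}
```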
From the code, it seems the "fast" table_strategy only returns None instead of table content; is that somewhat unreasonable?
It appears the "hq" strategy uses the unstructured package to extract tables. I have also used this package and found it actually performed worse than table-transformer.
Also, does the "llm" strategy return reliable table contents? From the code, it looks like it uses an LLM and a prompt to generate a table summarization of the document, but in my previous experience, this approach sometimes produces results that deviate significantly from the table content.
Thanks for the insightful comments; my opinions on these issues are as follows:

> From the code, it seems the "fast" table_strategy only returns None instead of table content; is that somewhat unreasonable?

In fact, by default our program uses OCR to extract all text information in files, including table information, which has already been implemented in other PRs. This PR only further enhances table understanding, so no extra content is returned in fast mode (fast mode is also the default mode).

> It appears the "hq" strategy uses the unstructured package to extract tables. I have also used this package and found it actually performed worse than table-transformer.

At present, we do use unstructured to extract table information, and the extraction quality is quite satisfactory. We have not tried table-transformer, but it is indeed worth considering.

> Also, does the "llm" strategy return reliable table contents? From the code, it looks like it uses an LLM and a prompt to generate a table summarization of the document, but in my previous experience, this approach sometimes produces results that deviate significantly from the table content.

Your understanding of what llm mode does is correct. It is true that the LLM's table summary is not completely reliable, but according to our experimental results, llm mode gives much better table QA performance overall.
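To make the llm-mode discussion concrete, here is a rough sketch of the kind of summarization step being described; the prompt wording and the `generate` callable are illustrative, not the PR's actual code:

```python
# Illustrative only: summarize an extracted table with an LLM so that the summary
# can be embedded and retrieved alongside the raw table text.
def summarize_table(table_text: str, generate) -> str:
    prompt = (
        "Task: Summarize the following table in a few sentences, preserving the "
        "key figures and the relationships between columns.\n\n"
        f"Table:\n{table_text}\n\nSummary:"
    )
    # `generate` can be any text-generation callable (e.g. a chatbot.predict wrapper).
    return generate(prompt)
```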
Type of Change
feature
API changed
Description
Enable RAG's table extraction functionality for PDF files
Enable RAG's table summary functionality, with three modes to choose from: [none, title, llm] (see the usage sketch below)
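A hedged end-to-end usage sketch, mirroring the existing NeuralChat retrieval-plugin example; whether `table_strategy` is passed through `plugins.retrieval.args` exactly like this should be verified against the PR:

```python
from intel_extension_for_transformers.neural_chat import PipelineConfig, build_chatbot, plugins

# Enable retrieval over local PDFs and opt into deeper table understanding.
plugins.retrieval.enable = True
plugins.retrieval.args["input_path"] = "./docs"    # PDFs containing tables
plugins.retrieval.args["table_strategy"] = "llm"   # "fast" | "hq" | "llm"

chatbot = build_chatbot(PipelineConfig(plugins=plugins))
print(chatbot.predict("What values are reported in Table 2 of the indexed report?"))
```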
Expected Behavior & Potential Risk
Users can use RAG's table extraction and summary functionality to get a better RAG experience
How has this PR been tested?
Local test and pre-CI
Dependency Change?
add
tesseract
dependencyadd
poppler
dependencychange
unstructured
dependencyunstructured[all-docs]
dependency
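As a quick sanity check for the new dependencies, the snippet below verifies that the tesseract and poppler binaries and the unstructured package are reachable; the assumption that these are exactly what the table-extraction path relies on should be confirmed against the README update requested above:

```python
import importlib
import shutil

# System binaries assumed to back the OCR / PDF-rendering steps.
assert shutil.which("tesseract"), "tesseract binary not found on PATH"
assert shutil.which("pdftoppm"), "poppler utilities not found on PATH"

# Python-side table extraction, provided by unstructured[all-docs].
importlib.import_module("unstructured")
print("table-extraction dependencies look OK")
```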